
    A High-level strategy for C-net discovery

    Causal nets have recently been proposed as a suitable model for process mining, due to their declarative semantics and compact representation. However, discovering a causal net from a log is a complex problem. Current algorithmic support for causal-net discovery comprises either fast but inaccurate methods (compromising quality) or accurate algorithms that are computationally demanding, which limits the size of the inputs they can process. This paper presents a high-level strategy that uses appropriate clustering techniques to split the log into pieces and benefits from the additive nature of causal nets: the causal net discovered for each piece can be structurally amalgamated into a valuable overall model. The claims in this paper are accompanied by experimental results showing the significance of the proposed strategy.
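    The split-discover-merge strategy described above can be sketched as follows. This is a toy illustration, not the paper's algorithm: traces are clustered by their activity set (any trace-clustering technique would do), a stand-in discovery function produces directly-follows arcs instead of real causal-net bindings, and the merge is a plain structural union, which is where the additive nature of causal nets is exploited.

    ```python
    from collections import defaultdict

    def cluster_traces(log):
        """Group traces that share the same activity set -- a simple stand-in
        for the clustering step that splits the log into pieces."""
        clusters = defaultdict(list)
        for trace in log:
            clusters[frozenset(trace)].append(trace)
        return list(clusters.values())

    def discover_arcs(traces):
        """Toy discovery: directly-follows arcs of one log fragment.
        A real C-net discovery algorithm would produce bindings, not arcs."""
        arcs = set()
        for trace in traces:
            arcs.update(zip(trace, trace[1:]))
        return arcs

    def amalgamate(fragments):
        """Additive merge: the overall model is the structural union of the
        models discovered for each piece of the log."""
        merged = set()
        for fragment in fragments:
            merged |= fragment
        return merged

    log = [["a", "b", "d"], ["a", "c", "d"], ["a", "b", "d"], ["x", "y"]]
    fragments = [discover_arcs(cluster) for cluster in cluster_traces(log)]
    model = amalgamate(fragments)
    ```

    The point of the sketch is the shape of the pipeline: each piece is small enough for an accurate (but expensive) discovery algorithm, and the merge step is cheap because it is purely structural.
    
    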

    Amending C-net discovery algorithms

    As the complexity of information systems evolves, there is a growing interest in defining suitable process models that can overcome the limitations of traditional formalisms such as Petri nets. Causal nets are one such promising process model, since important characteristics of their semantics deviate from those of the models in the literature. Due to their novelty, very few discovery algorithms exist for causal nets, and the existing ones offer few guarantees regarding the outcome produced. This paper describes an algorithm that can be applied as a second step after any discovery technique to significantly improve the quality of the final causal net. We have tested the technique in combination with the existing algorithms in the literature on several benchmarks, observing a considerable improvement in all of them.

    Reducing event variability in logs by clustering of word embeddings

    Several business-to-business and business-to-consumer services are provided as a human-to-human conversation in which the provider's representative guides the conversation towards its resolution based on her experience, following internal guidelines. Attempts to automate these services are becoming popular, but they are currently limited to procedures and objectives fixed at design time. Process discovery techniques could provide the mechanisms needed to monitor event logs derived from textual conversations and expand the capabilities of conversational bots. Still, the variability of textual messages hinders the utility of process discovery techniques by producing unstructured, hard-to-understand process models. In this paper, we propose the use of word embeddings to combine events that have semantically similar names.
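    The core idea, merging event names whose embeddings are close, can be sketched with a greedy clustering over cosine similarity. The embedding vectors below are hypothetical placeholders; in practice they would come from a pre-trained word-embedding model (e.g. word2vec or fastText), and the clustering technique may differ from this simple greedy pass.

    ```python
    import math

    # Hypothetical embeddings for event names; a real pipeline would look
    # these up in a pre-trained word-embedding model.
    EMB = {
        "open ticket":   [0.90, 0.10, 0.00],
        "create ticket": [0.88, 0.12, 0.02],
        "close ticket":  [0.10, 0.90, 0.00],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    def merge_similar(names, threshold=0.95):
        """Greedy clustering: map each event name to the first canonical
        representative whose embedding is close enough, reducing the
        variability of event labels in the log."""
        canon = {}
        for name in names:
            for rep in set(canon.values()):
                if cosine(EMB[name], EMB[rep]) >= threshold:
                    canon[name] = rep
                    break
            else:
                canon[name] = name  # becomes a new representative
        return canon

    mapping = merge_similar(["open ticket", "create ticket", "close ticket"])
    ```

    Relabeling the log through `mapping` before discovery collapses semantically equivalent events into one activity, which is what yields a more structured model.
    
    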

    Next stop 'NoOps': enabling cross-system diagnostics through graph-based composition of logs and metrics

    Performing diagnostics in IT systems is an increasingly complicated task that cannot be completed in satisfactory time even by the most skillful operators. Systems and their architecture change very rapidly in response to business and user demand. Many organizations see value in the NoOps ("No Operations") maintenance and management model, one implementation of which is a system that is maintained automatically without any human intervention. The path to NoOps involves not only precise and fast diagnostics but also reusing as much knowledge as possible after the system is reconfigured or changed. The biggest challenge is to leverage knowledge of one IT system and reuse it for the diagnostics of another, different system. We propose a framework of weighted graphs which can transfer knowledge and perform high-quality diagnostics of IT systems. We encode all available data in a graph representation of a system state and automatically calculate the weights of these graphs. Then, by evaluating the similarity between graphs, we transfer knowledge about failures from one system to another and use it for diagnostics. We successfully evaluate the proposed approach on Spark, Hadoop, Kafka and Cassandra systems.
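    The transfer-by-similarity idea can be sketched in a few lines. The graph encoding and similarity measure here are assumptions for illustration: a system state is a dictionary of weighted edges, similarity is weighted Jaccard, and diagnosis picks the most similar known failure graph, possibly recorded on a different system.

    ```python
    def weighted_jaccard(g1, g2):
        """Similarity of two weighted graphs given as {edge: weight} dicts."""
        edges = set(g1) | set(g2)
        num = sum(min(g1.get(e, 0.0), g2.get(e, 0.0)) for e in edges)
        den = sum(max(g1.get(e, 0.0), g2.get(e, 0.0)) for e in edges)
        return num / den if den else 0.0

    def diagnose(observed, known_failures):
        """Transfer knowledge: label the observed state with the diagnosis of
        the most similar known failure graph (from any system)."""
        return max(known_failures,
                   key=lambda label: weighted_jaccard(observed, known_failures[label]))

    # Hypothetical failure library built on one system...
    known = {
        "disk-full":    {("db", "disk"): 0.9, ("app", "db"): 0.2},
        "network-loss": {("app", "lb"): 0.1, ("lb", "db"): 0.05},
    }
    # ...applied to an observation from another, structurally similar system.
    observed = {("db", "disk"): 0.8, ("app", "db"): 0.3}
    diag = diagnose(observed, known)
    ```

    The design choice worth noting is that similarity is computed on the graph structure rather than on raw metrics, which is what makes the knowledge reusable across systems with different components.
    
    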

    Dynamic flight plan design for UAS remote sensing applications

    The development of Flight Control Systems (FCS), coupled with the availability of other Commercial Off-The-Shelf (COTS) components, is enabling the introduction of Unmanned Aircraft Systems (UAS) into the civil market. UAS have great potential for a wide variety of civil applications such as environmental monitoring, emergency situations, surveillance tasks and more. In general, they are especially well suited for the so-called D-cube operations (Dirty, Dull or Dangerous). Current technology greatly facilitates the construction of UAS, and sophisticated flight control systems make them accessible to end users with little aeronautical expertise. However, we believe that for their successful introduction into the civil market, progress needs to be made to deliver systems able to perform a wide variety of missions with minimal reconfiguration and reduced operational costs. Most current flight plan specification mechanisms consist of a simple list of waypoints, an approach with important limitations. This paper proposes a new specification mechanism with semantically richer constructs that enables the end user to specify more complex flight plans. The proposed formalism provides means for specifying iterative behavior, conditional branching and other constructs to dynamically adapt the flight path to mission circumstances. Collaborating with the FCS, a new module on board the UAS will be in charge of executing these plans. This research also studies how the proposed flight plan structure can be tailored to the specific needs of remote sensing, where well-structured and efficient area and perimeter scanning is mandatory. We introduce several strategies focused on optimizing the scanning process for tactical or mini UAS, and present a prototype implementation of this module together with the results obtained in simulations.
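    What "semantically richer than a waypoint list" means can be illustrated with a minimal sketch. The construct names (`Waypoint`, `Repeat`, `Branch`) and the interpreter are invented for this illustration, not the paper's formalism: the point is that a plan with iteration and conditional branching flattens into different waypoint sequences depending on mission state.

    ```python
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Waypoint:            # a plain leg, as in a traditional plan
        lat: float
        lon: float

    @dataclass
    class Repeat:              # iterative behaviour: fly a sub-plan N times
        body: List
        times: int

    @dataclass
    class Branch:              # conditional branching on mission state
        condition: Callable[[dict], bool]
        then_legs: List
        else_legs: List

    def execute(plan, state):
        """Flatten a structured plan into the waypoint sequence actually flown."""
        flown = []
        for leg in plan:
            if isinstance(leg, Waypoint):
                flown.append(leg)
            elif isinstance(leg, Repeat):
                for _ in range(leg.times):
                    flown += execute(leg.body, state)
            elif isinstance(leg, Branch):
                chosen = leg.then_legs if leg.condition(state) else leg.else_legs
                flown += execute(chosen, state)
        return flown

    scan_leg = [Waypoint(41.0, 2.0), Waypoint(41.0, 2.1)]
    plan = [Repeat(scan_leg, times=2),
            Branch(lambda s: s["battery"] > 0.3,
                   then_legs=[Waypoint(41.1, 2.0)],    # continue the mission
                   else_legs=[Waypoint(40.9, 2.0)])]   # divert home
    flown = execute(plan, {"battery": 0.5})
    ```

    A flat waypoint list would have to be regenerated on the ground for every contingency; a plan in this style adapts on board, which is the reconfiguration saving the abstract argues for.
    
    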

    Relative timing verification revisited

    Technical Report UPC-DAC-RR-CAP-2008-39. Timed verification is a difficult problem. One way to simplify it is to use iterative refinement techniques. Here we investigate the advantages and shortcomings of this approach when combined with relative-timing reasoning.
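    The iterative-refinement loop mentioned in the abstract has a generic skeleton, sketched below under broad assumptions: `check` verifies the current (timing-abstracted) model and returns a counterexample on failure, and `refine` strengthens the model, e.g. by adding a relative-timing constraint that rules that counterexample out. The toy usage stands in for a real checker.

    ```python
    def verify_with_refinement(check, refine, model, max_iters=10):
        """Generic refinement loop: verify an abstraction, and only when a
        counterexample appears, refine the model and try again."""
        for _ in range(max_iters):
            ok, counterexample = check(model)
            if ok:
                return True, model
            model = refine(model, counterexample)
        return False, model

    # Toy stand-in: the "model" is just a constraint count, and verification
    # succeeds once three (hypothetical) timing constraints have been added.
    ok, final = verify_with_refinement(
        check=lambda m: (m >= 3, m),
        refine=lambda m, cex: m + 1,
        model=0)
    ```

    The trade-off the report investigates lives inside this loop: each refinement step keeps the model small, but convergence is not guaranteed within the iteration budget.
    
    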

    Algorithms to mesh 2D CSG polygonal domains from previously meshed CSG primitives

    In this work we report on two algorithms that mesh a 2D CSG polygonal domain built from previously meshed primitives. The resulting mesh is computed from the component meshes. Both algorithms first compute the set of cells of each mesh that overlap. One algorithm uses a propagation technique: every two nodes in the overlapping area that are close enough are collapsed into a new node, and the final state is reached when nodes no longer collapse. The other algorithm is based on a relaxation technique: an energy function is associated with each node in the areas where the meshes overlap, and the minimization of the total energy function leads the nodes to a steady state that defines the final mesh. We give some examples that illustrate how the algorithms work.
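    The node-collapse propagation step can be sketched on bare point sets. This is a simplification of what the abstract describes: it ignores mesh connectivity entirely and only shows the fixpoint structure, where any two nodes within a tolerance are merged into their midpoint until no such pair remains.

    ```python
    import math

    def collapse_nodes(nodes, eps=0.1):
        """Propagation sketch: repeatedly collapse any two nodes closer than
        eps into their midpoint; stop when no pair remains within eps."""
        nodes = [tuple(n) for n in nodes]
        changed = True
        while changed:
            changed = False
            for i in range(len(nodes)):
                for j in range(i + 1, len(nodes)):
                    if math.dist(nodes[i], nodes[j]) < eps:
                        midpoint = tuple((a + b) / 2
                                         for a, b in zip(nodes[i], nodes[j]))
                        nodes[i] = midpoint
                        del nodes[j]
                        changed = True
                        break
                if changed:
                    break
        return nodes

    # Two overlapping-region nodes within eps collapse; the distant one stays.
    merged = collapse_nodes([(0.0, 0.0), (0.05, 0.0), (1.0, 1.0)])
    ```

    In the actual algorithm the collapse would also rewire the incident mesh cells; here only the geometric fixpoint is shown.
    
    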
